On Approximating the Sum-Rate for Multiple-Unicasts
We study upper bounds on the sum-rate of multiple-unicasts. We approximate the Generalized Network Sharing bound (GNS cut) of the multiple-unicasts network coding problem with independent sources. Our approximation algorithm runs in polynomial time and yields an upper bound on the joint source entropy rate that is within a multiplicative factor of the GNS cut. It further yields a vector-linear network code that achieves a joint source entropy rate within a multiplicative factor of the GNS cut, but \emph{not} with independent sources: the code induces a correlation pattern among the sources. Our second contribution is a separation result for vector-linear network codes: for any given field there exist networks for which the optimum sum-rate supported by vector-linear codes over that field for independent sources can be multiplicatively separated, for any constant, from the optimum joint entropy rate supported by a code that allows correlation between sources. Finally, we establish a similar separation result for the asymmetric optimum vector-linear sum-rates achieved over two distinct fields for independent sources, revealing that the choice of field can heavily impact the performance of a linear network code.

Comment: 10 pages; a shorter version appeared at ISIT (International Symposium on Information Theory) 2015; some typos corrected.
Quadratic maximization under combinatorial constraints and related applications
Motivated primarily by restricted variants of Principal Component Analysis (PCA), we study quadratic maximization problems subject to sparsity, nonnegativity, and other combinatorial constraints. Intuitively, a key technical challenge is determining the support of the optimal solution. We develop a method that can, perhaps surprisingly, solve the maximization exactly when the argument matrix of the quadratic objective is positive semidefinite and has constant rank. Our approach relies on a hyper-spherical transformation of the low-rank space and has complexity that scales exponentially in the rank of the input, but polynomially in the ambient dimension. Extending these observations, we describe a simpler approximation algorithm based on exploring the low-rank space with an ε-net, drastically improving the dependence on the ambient dimension and implying a Polynomial Time Approximation Scheme (PTAS) for inputs whose rank grows at most logarithmically in the dimension, or whose spectrum decays sufficiently sharply. We discuss extensions of our approach to jointly computing multiple principal components under combinatorial constraints, such as the problem of extracting multiple orthogonal nonnegative components, or sparse components with common or disjoint supports, and related approximate matrix factorization problems. We further extend our quadratic maximization framework to bilinear optimization problems and employ it in the context of specific applications, e.g., to develop a provable approximation algorithm for the NP-hard problem of Bipartite Correlation Clustering (BCC). Real datasets will typically produce covariance matrices that have full rank, rendering our algorithms inapplicable. Our approach is to first obtain a low-rank approximation of the input data and subsequently solve the low-rank problem using our framework.

Although this approach is not always suitable, from an optimization perspective it yields provable, data-dependent performance bounds that rely on the spectral decay of the input and the employed approximation technique. Interestingly, most real matrices can be well approximated by low-rank surrogates, since their eigenvalues display a significant decay. Empirical evaluation shows that our algorithms have excellent performance and in many cases outperform the previous state of the art. Finally, utilizing our framework, we develop algorithms with interesting theoretical guarantees in the context of specific applications, such as approximate Orthogonal Nonnegative Matrix Factorization and Bipartite Correlation Clustering.
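The ε-net exploration of the low-rank space can be sketched in a few lines for the sparse-PCA special case. This is an illustrative sketch, not the authors' implementation: the net is approximated here with random unit directions (a deterministic net has size exponential in the rank), and the function names and parameters are assumptions for this example. For each direction c in the rank-d space, the vector Vc proposes a candidate support (its top-s magnitudes), and the best candidate over the net is returned.

```python
import numpy as np

def epsnet_sphere(d, eps, rng):
    # Random points standing in for an eps-net on the unit sphere in R^d
    # (a deterministic net would have size on the order of (1/eps)^d).
    m = int(np.ceil((4.0 / eps) ** d))
    C = rng.standard_normal((m, d))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

def sparse_pca_lowrank(A, s, eps=0.3, seed=0):
    """Approximate max x^T A x over unit-norm x with at most s nonzeros,
    for a PSD matrix A of low rank, by scanning the low-rank space."""
    # Low-rank factorization A = V V^T via eigendecomposition.
    w, U = np.linalg.eigh(A)
    keep = w > 1e-10
    V = U[:, keep] * np.sqrt(w[keep])          # n x d factor
    d = V.shape[1]
    rng = np.random.default_rng(seed)
    best_val, best_x = -np.inf, None
    for c in epsnet_sphere(d, eps, rng):
        v = V @ c                               # candidate direction in R^n
        supp = np.argsort(-np.abs(v))[:s]       # candidate support: top-s magnitudes
        nv = np.linalg.norm(v[supp])
        if nv == 0.0:
            continue
        x = np.zeros(len(v))
        x[supp] = v[supp] / nv                  # sparse, unit-norm candidate
        val = x @ A @ x
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x
```

The returned value is a lower bound on the constrained optimum; the quality of the bound degrades gracefully with ε and with the neglected tail of the spectrum, in line with the data-dependent bounds discussed above.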
Orthogonal NMF through Subspace Exploration
Abstract: Orthogonal Nonnegative Matrix Factorization (ONMF) aims to approximate a nonnegative matrix as the product of two k-dimensional nonnegative factors, one of which has orthonormal columns. It yields potentially useful data representations as a superposition of disjoint parts, and it has been shown to work well for clustering tasks where traditional methods underperform. Existing algorithms rely mostly on heuristics which, despite their good empirical performance, lack provable performance guarantees. We present a new ONMF algorithm with provable approximation guarantees. For any constant dimension k, we obtain an additive EPTAS without any assumptions on the input. Our algorithm relies on a novel approximation to the related Nonnegative Principal Component Analysis (NNPCA) problem; given an arbitrary data matrix, NNPCA seeks k nonnegative components that jointly capture most of the variance. Our NNPCA algorithm is of independent interest and generalizes previous work that could only obtain guarantees for a single component. We evaluate our algorithms on several real and synthetic datasets and show that their performance matches or outperforms the state of the art.
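The NNPCA subproblem admits a short sketch for a single nonnegative component, in the same low-rank-exploration spirit (the paper handles k components jointly): for each direction c on a net over the rank-d sphere, the nonnegative part of Vc, normalized, is a candidate component. The randomized net and all names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def nnpca_component(A, eps=0.25, seed=0):
    """Approximate max x^T A x over unit-norm, entrywise-nonnegative x,
    for a PSD matrix A of low rank d, by scanning directions in R^d."""
    w, U = np.linalg.eigh(A)
    keep = w > 1e-10
    V = U[:, keep] * np.sqrt(w[keep])           # n x d factor with A = V V^T
    d = V.shape[1]
    rng = np.random.default_rng(seed)
    m = int(np.ceil((4.0 / eps) ** d))          # size of the randomized net
    C = rng.standard_normal((m, d))
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    best_val, best_x = -np.inf, None
    for c in C:
        x = np.maximum(V @ c, 0.0)              # enforce nonnegativity
        n = np.linalg.norm(x)
        if n == 0.0:
            continue
        x /= n                                  # unit-norm candidate
        val = x @ A @ x
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x
```

Each candidate is feasible by construction, so the returned value is always a valid lower bound on the NNPCA optimum for the given matrix.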
Sparse principal component of a rank-deficient matrix
Summary: We consider the problem of identifying the sparse principal component of a rank-deficient matrix. We introduce auxiliary spherical variables and prove that there exists a set of candidate index-sets (that is, sets of indices to the nonzero elements of the vector argument) whose size is polynomially bounded in terms of the rank and which contains the optimal index-set, i.e., the index-set of the nonzero elements of the optimal solution. Finally, we develop an algorithm that computes the optimal sparse principal component in polynomial time for any sparsity degree.

Presented at: International Symposium on Information Theory.
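The candidate index-set idea can be illustrated for rank 2, where the auxiliary spherical variable reduces to a single angle. The exact method computes the finitely many breakpoint angles at which the top-s support changes; the sketch below (a hypothetical simplification, with assumed names) just sweeps a fine angular grid and collects the distinct supports that appear.

```python
import numpy as np

def candidate_supports_rank2(V, s, num_angles=2000):
    """Collect candidate index-sets for sparse PCA when A = V V^T has rank 2.

    As c = (cos t, sin t) sweeps the half-circle, the top-s coordinates of
    |V c| change only at finitely many angles, so the number of distinct
    candidate supports is polynomial in the dimension rather than
    combinatorial."""
    supports = set()
    for t in np.linspace(0.0, np.pi, num_angles, endpoint=False):
        v = V @ np.array([np.cos(t), np.sin(t)])
        supp = tuple(sorted(np.argsort(-np.abs(v))[:s]))  # top-s magnitudes
        supports.add(supp)
    return supports
```

Given the candidate list, the optimal sparse principal component is recovered by solving, for each candidate support, the small eigenproblem restricted to those coordinates and keeping the best.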